Code pre-trained models (CodePTMs) have recently demonstrated significant success in code intelligence. To interpret these models, several probing methods have been applied; however, they fail to consider the inherent characteristics of code. In this paper, to address this problem, we propose a novel probing method, CAT-probing, to quantitatively interpret how CodePTMs attend to code structure. We first denoise the input code sequences based on the token types pre-defined by the compilers, filtering out tokens whose attention scores are too small. We then define a new metric, the CAT-score, to measure the commonality between the token-level attention scores generated by CodePTMs and the pair-wise distances between the corresponding AST nodes. The higher the CAT-score, the stronger the ability of CodePTMs to capture code structure. We conduct extensive experiments integrating CAT-probing with representative CodePTMs on different programming languages. Experimental results show the effectiveness of CAT-probing for CodePTM interpretation. Our code and data are publicly available at https://github.com/nchen909/CodeAttention.
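As a rough illustration of the idea, the sketch below computes a CAT-score-like quantity from an attention matrix and an AST distance matrix. The filtering threshold, the median-based binarization, and the overlap ratio are all assumptions made for illustration, not the paper's exact definition.

    import numpy as np

    def cat_score(attention, ast_distance, token_mask, threshold=0.05):
        """Hypothetical CAT-score-style metric.
        attention:    (n, n) token-level attention scores from a CodePTM head
        ast_distance: (n, n) pair-wise distances between corresponding AST nodes
        token_mask:   (n,) True for the compiler-defined token types to keep
        """
        # Denoise: drop tokens of unwanted types or with negligible attention
        keep = token_mask & (attention.max(axis=-1) > threshold)
        A = attention[np.ix_(keep, keep)]
        D = ast_distance[np.ix_(keep, keep)]
        # Measure how often confident attention pairs coincide with close AST pairs
        high_attn = A > np.median(A)
        close_ast = D < np.median(D)
        return (high_attn & close_ast).sum() / max(high_attn.sum(), 1)

A higher return value would indicate that strong attention tends to fall on token pairs that are also close in the AST, which is the intuition the CAT-score formalizes.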
Accurate and reliable 3D detection is vital for many applications, including autonomous vehicles and service robots. In this paper, we propose a flexible and high-performance 3D detection framework for 3D temporal object detection with point cloud sequences, named MPPNet. We present a novel three-hierarchy framework with proxy points for multi-frame feature encoding and interaction to achieve better detection. The three hierarchies conduct per-frame feature encoding, short-clip feature fusion, and whole-sequence feature aggregation, respectively. To process long-sequence point clouds with reasonable computational resources, intra-group feature mixing and inter-group feature attention are proposed to form the second and third feature-encoding hierarchies, which are recurrently applied to aggregate multi-frame trajectory features. The proxy points not only act as consistent object representations for each frame, but also serve as couriers that facilitate feature interaction between frames. Experiments on the large Waymo Open Dataset show that our approach outperforms state-of-the-art methods by large margins when applied to both short (e.g., 4-frame) and long (e.g., 16-frame) point cloud sequences. The code is available at https://github.com/open-mmlab/openpcdet.
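The abstract gives no code, but the grouping idea can be sketched in PyTorch. The module below is only a toy reading of the intra-group feature mixing and inter-group attention hierarchies; all layer choices, names, and shapes are assumptions, not the official MPPNet implementation.

    import torch
    import torch.nn as nn

    class GroupedTrajectoryAggregator(nn.Module):
        """Toy sketch: split per-frame proxy-point features of a long sequence
        into short clips, mix features within each clip, then let clips
        interact through attention across the whole sequence."""

        def __init__(self, dim=128, group_size=4, heads=4):
            super().__init__()
            self.group_size = group_size
            self.intra_mlp = nn.Sequential(
                nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            self.inter_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):  # x: (num_frames, num_proxy_points, dim)
            t, p, d = x.shape
            g = t // self.group_size
            x = x[: g * self.group_size].view(g, self.group_size, p, d)
            clip = self.intra_mlp(x.mean(dim=1))     # intra-group mixing: (g, p, d)
            seq = clip.flatten(0, 1).unsqueeze(0)    # (1, g * p, d)
            out, _ = self.inter_attn(seq, seq, seq)  # inter-group interaction
            return out.squeeze(0)                    # sequence-level features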
Reading comprehension is a complex cognitive process involving many human brain activities. A large body of work has studied reading patterns and attention allocation during reading comprehension in information-retrieval-related scenarios. However, little is known about what happens in the human brain during reading comprehension and how these cognitive activities affect the information retrieval process. Moreover, with advances in brain imaging techniques such as electroencephalography (EEG), brain signals can be collected almost in real time, and it is worth exploring whether they can serve as feedback to facilitate information acquisition performance. In this paper, we carefully design a lab-based user study to investigate brain activities during reading comprehension. Our findings show that neural responses vary with different types of reading content, i.e., content that can satisfy users' information needs and content that cannot. We suggest that various cognitive activities, such as cognitive loading, semantic-thematic understanding, and inferential processing, are supported at micro time scales during reading comprehension. From these findings, we draw several insights for information retrieval tasks, such as ranking model construction and interface design. Furthermore, we suggest the possibility of detecting reading comprehension states for proactive real-world systems. To this end, we propose a Unified framework for EEG-based Reading Comprehension Modeling (UERCM). To verify its effectiveness, we conduct extensive experiments based on EEG features for two reading comprehension tasks: answer sentence classification and answer extraction. The results show that it is feasible to improve the performance of both tasks with brain signals.
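As a toy illustration of the answer-sentence-classification task on EEG features (the abstract does not specify UERCM's actual features or architecture, so everything below is assumed), a simple scikit-learn baseline might look like this:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))    # hypothetical EEG features per read sentence
    y = rng.integers(0, 2, size=200)  # 1 = the sentence contains the answer

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())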
Vision Transformers (ViTs) outperform convolutional neural networks (CNNs) in several vision tasks thanks to their global modeling capabilities. However, ViTs lack the inductive bias inherent to convolutions, which makes them require large amounts of training data. As a result, ViTs do not perform as well as CNNs on small datasets, such as those common in medicine and science. We found experimentally that masked autoencoders (MAE) can make the transformer focus more on the image itself, thus alleviating the data-hungry issue of ViTs to some extent. Yet the current MAE model is too complex, which leads to over-fitting on small datasets and leaves a gap between MAEs trained on small datasets and advanced CNN models. Therefore, we investigated how to reduce the decoder complexity in MAE and found an architectural configuration better suited to small datasets. In addition, we designed a location prediction task and a contrastive learning task to introduce localization and invariance characteristics into MAE. Our contrastive learning task not only enables the model to learn high-level visual information but also allows training of MAE's class token, something most MAE improvement efforts do not consider. Extensive experiments show that, compared with currently popular masked image modeling (MIM) methods and vision transformers for small datasets, our method achieves state-of-the-art performance on standard small datasets as well as medical datasets with few samples. The code and models are available at https://github.com/Talented-Q/SDMAE.
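The contrastive task on the class token could, for example, take an InfoNCE-like form. The sketch below is one plausible reading; the temperature and the exact loss form are assumptions rather than the paper's formulation.

    import torch
    import torch.nn.functional as F

    def cls_token_contrastive_loss(cls_a, cls_b, temperature=0.1):
        """cls_a, cls_b: (batch, dim) class tokens from two augmented views."""
        a = F.normalize(cls_a, dim=-1)
        b = F.normalize(cls_b, dim=-1)
        logits = a @ b.t() / temperature  # (batch, batch) similarity matrix
        targets = torch.arange(a.size(0), device=a.device)
        # Matching views (the diagonal) are positives; all other pairs are negatives
        return F.cross_entropy(logits, targets)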
Facial expression recognition (FER) plays a significant role in ubiquitous computer vision applications. We revisit this problem from a new perspective: whether useful representations that improve FER performance can be acquired during the image generation process. We propose a novel generative method based on an image inversion mechanism for the FER task, termed Inversion FER (IFER). In particular, we devise a novel Adversarial Style Inversion Transformer (ASIT) for IFER to comprehensively extract features of generated facial images. In addition, ASIT is equipped with an image inversion discriminator that measures the cosine similarity of semantic features between source and generated images, constrained by a distribution alignment loss. Finally, we introduce a feature modulation module to fuse the structural code and latent codes from ASIT for the subsequent FER task. We extensively evaluate ASIT on facial datasets such as FFHQ and CelebA-HQ, showing that our approach achieves state-of-the-art facial inversion performance. IFER also achieves competitive results on facial expression recognition datasets such as RAF-DB, SFEW, and AffectNet. The code and models are available at https://github.com/Talented-Q/IFER-master.
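As one plausible rendering of the discriminator's cosine-similarity measure and the distribution alignment loss (both are named but not defined in the abstract), consider:

    import torch
    import torch.nn.functional as F

    def inversion_similarity_loss(src_feat, gen_feat):
        # Push semantic features of the source image and its inversion together
        return (1.0 - F.cosine_similarity(src_feat, gen_feat, dim=-1)).mean()

    def distribution_alignment_loss(src_feat, gen_feat):
        # Hypothetical moment-matching form of the alignment constraint
        return ((src_feat.mean(0) - gen_feat.mean(0)) ** 2).mean() + \
               ((src_feat.std(0) - gen_feat.std(0)) ** 2).mean()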
Compared with the vanilla transformer, the window-based transformer offers a better trade-off between accuracy and efficiency. Although the window-based transformer has made great progress, its long-range modeling capabilities are limited by the size of the local window and the window connection scheme. To address this problem, we propose a novel Token Transformer (TT). The core mechanism of TT is the addition of a Class (CLS) token that summarizes the information in each local window; we refer to this type of token interaction as CLS Attention. These CLS tokens interact spatially with the tokens in each window to enable long-range modeling. To preserve the hierarchical design of the window-based transformer, we design a Feature Inheritance Module (FIM) in each phase of TT to deliver local window information from the previous phase to the CLS tokens of the next phase. In addition, we design a Spatial-Channel Feedforward Network (SCFFN) in TT, which can mix CLS tokens and embedded tokens over the spatial and channel domains without additional parameters. Extensive experiments show that TT achieves competitive results with few parameters in image classification and downstream tasks.
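A minimal sketch of the CLS Attention idea, under the assumption that each window owns one CLS token that first summarizes its own window and then exchanges information with the other windows' CLS tokens (layer sizes and shapes are illustrative, not the authors' implementation):

    import torch
    import torch.nn as nn

    class WindowCLSAttention(nn.Module):
        """Toy reading of CLS Attention for window-based transformers."""

        def __init__(self, dim=96, heads=3):
            super().__init__()
            self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, windows, cls):  # windows: (W, N, dim), cls: (W, dim)
            # Each CLS token attends to the tokens of its own local window
            q = cls.unsqueeze(1)
            cls_new, _ = self.local_attn(q, windows, windows)
            cls_new = cls_new.squeeze(1).unsqueeze(0)  # (1, W, dim)
            # CLS tokens attend to each other for long-range modeling
            g, _ = self.global_attn(cls_new, cls_new, cls_new)
            return windows, g.squeeze(0)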
The growing use of probe vehicles generates a huge number of GNSS records. Limited by satellite positioning technology, further improving the accuracy of map matching is challenging work, especially for low-frequency trajectories. When matching a trajectory, the current spatio-temporal information of the ego vehicle is the most useful and requires the least amount of data. In addition, there is a large amount of other data, such as other vehicles' states and past prediction results, but it is hard to extract information from them that is useful for map matching and path inference. Most map-matching studies use only the ego vehicle's data and ignore other vehicles' data. On this basis, this paper designs a new map-matching method that makes full use of "big data". First, we divide all the data into four groups according to their spatial and temporal distance from the probe point being matched, which allows us to rank their usefulness. Then, we design three different methods to extract valuable information (scores) from them: a score for speed and bearing, a score for historical usage, and a score for traffic state using a spectral graph Markov neural network. Finally, we use a modified top-K shortest-path method to search for candidate paths within an elliptical region and then use the fused scores to infer the path (projected position). We tested the proposed method against baseline algorithms on a real-world dataset from China. The results show that all the scoring methods enhance map-matching accuracy. Moreover, our method outperforms the others, especially when the GNSS probing frequency is less than 0.01 Hz.
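The score-fusion step might look like the toy sketch below; the weights, score names, and linear fusion rule are hypothetical, since the abstract only states that the three scores are fused to rank candidate paths.

    # Each candidate path carries three scores in [0, 1] (names are assumed)
    def fuse_scores(candidates, w_speed=0.4, w_history=0.3, w_traffic=0.3):
        def fused(c):
            return (w_speed * c["speed_bearing"]
                    + w_history * c["historical_usage"]
                    + w_traffic * c["traffic_state"])
        # Return the candidate path with the highest fused score
        return max(candidates, key=fused)

    paths = [
        {"id": "p1", "speed_bearing": 0.8, "historical_usage": 0.5, "traffic_state": 0.6},
        {"id": "p2", "speed_bearing": 0.6, "historical_usage": 0.9, "traffic_state": 0.7},
    ]
    print(fuse_scores(paths)["id"])  # -> "p2"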
Standard approaches to neural network implementation provide powerful function approximation capabilities but are limited in their ability to learn meta-representations and reason about probabilistic uncertainty in their predictions. Gaussian processes, on the other hand, adopt a Bayesian learning scheme to estimate such uncertainty but are constrained by their efficiency and approximation capacity. The Neural Process Family (NPF) intends to offer the best of both worlds by leveraging neural networks for meta-learning predictive uncertainty. In recent years, this potential has brought substantial research activity to the family. Hence, a comprehensive survey of NPF models is needed to organize and relate their motivations, methodologies, and experiments. This paper intends to address this gap while digging deeper into the formulations, research themes, and applications of the family members. We shed light on their potential to bring recent advances from other deep learning domains under one umbrella. We then provide a rigorous taxonomy of the family and empirically demonstrate their capabilities for modeling data-generating functions operating on 1-D, 2-D, and 3-D input domains. We conclude by discussing our perspectives on promising directions that can fuel research progress in the field. The code for our experiments will be made available at https://github.com/srvcodes/neural-processes-survey.
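For readers new to the family, a textbook-style Conditional Neural Process (a generic sketch, not taken from the survey's code) illustrates the core recipe of meta-learning predictive uncertainty with neural networks:

    import torch
    import torch.nn as nn

    class TinyCNP(nn.Module):
        """Encode context (x, y) pairs into a permutation-invariant summary,
        then decode a predictive mean and scale for each target input."""

        def __init__(self, x_dim=1, y_dim=1, hidden=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(x_dim + y_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
            self.decoder = nn.Sequential(
                nn.Linear(hidden + x_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * y_dim))

        def forward(self, x_ctx, y_ctx, x_tgt):
            r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
            r = r.expand(x_tgt.size(0), -1)
            mu, raw_sigma = self.decoder(torch.cat([r, x_tgt], dim=-1)).chunk(2, dim=-1)
            sigma = 0.1 + 0.9 * nn.functional.softplus(raw_sigma)  # predictive uncertainty
            return mu, sigma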
With the rapid growth of software scale and complexity, a large number of bug reports are submitted to bug tracking systems. To speed up defect repair, these reports need to be accurately classified so that they can be routed to the appropriate developers. However, existing classification methods use only the textual information of bug reports, which leads to low performance. To solve this problem, this paper proposes a new automatic classification method for bug reports. The innovation is that, when classifying a bug report, in addition to using its textual information, we also consider the report's intention (i.e., suggestion or explanation), thereby improving classification performance. First, we collect bug reports from four ecosystems (Apache, Eclipse, Gentoo, Mozilla) and manually annotate them to construct an experimental dataset. Then, we use natural language processing techniques to preprocess the data. On this basis, BERT and TF-IDF are used to extract the features of the intention and of multiple textual fields. Finally, these features are used to train the classifiers. Experimental results with five classifiers (K-nearest neighbors, naive Bayes, logistic regression, support vector machine, and random forest) show that our proposed method achieves better performance, with F-measures ranging from 87.3% to 95.5%.
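The feature-combination idea can be sketched with scikit-learn; the example reports, the intention encoding, and the choice of random forest here are placeholders for illustration, not the paper's exact pipeline.

    from scipy.sparse import hstack, csr_matrix
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier

    reports = ["crash when opening large file", "please add dark mode support"]
    intent = [[0], [1]]  # 0 = explanation, 1 = suggestion (e.g., from a BERT classifier)
    labels = ["bug", "enhancement"]

    tfidf = TfidfVectorizer()
    X = hstack([tfidf.fit_transform(reports), csr_matrix(intent)])  # text + intention

    clf = RandomForestClassifier(random_state=0).fit(X, labels)
    print(clf.predict(X))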
Auto-encoder-based deep subspace clustering (DSC) is widely used in computer vision, motion segmentation, and image processing. However, it suffers from three problems in self-expressive matrix learning: first, the simple reconstruction loss carries little information for learning the self-expressive weights; second, constructing the self-expressive layer, whose size depends on the number of samples, incurs a high computational cost; and last, existing regularization terms have limited connectivity. To address these problems, in this paper we propose a novel model named Self-Supervised deep Subspace Clustering with Entropy-norm (S³CE). Specifically, S³CE exploits a self-supervised contrastive network to obtain richer feature vectors. The local structure and dense connectivity of the original data are preserved thanks to the self-expressive layer and an additional entropy-norm constraint. Moreover, a new module with data augmentation is designed to help S³CE focus on the key information of the data and to improve the clustering performance for positive and negative instances via spectral clustering. Extensive experimental results demonstrate the superior performance of S³CE compared with state-of-the-art methods.
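One plausible form of a self-expressive objective with an entropy-norm regularizer is sketched below; the exact loss, its sign, and the coefficient in S³CE are not given in the abstract, so these are assumptions.

    import torch

    def self_expression_loss(z, C, gamma=0.1, eps=1e-12):
        """z: (n, d) latent features; C: (n, n) learnable self-expression matrix."""
        C = C - torch.diag(torch.diag(C))          # forbid trivial self-reconstruction
        rec = ((z - C @ z) ** 2).sum()             # each sample rebuilt from its subspace
        P = torch.softmax(C, dim=1)                # row-stochastic affinities
        entropy = -(P * torch.log(P + eps)).sum()  # higher entropy = denser connectivity
        return rec - gamma * entropy               # encourage connectivity (assumed sign)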